Discriminatory or Samaritan -- which AI is needed for humanity? An Evolutionary Game Theory Analysis of Hybrid Human-AI Populations
Tim Booker, Manuel Miranda, Jesús A. Moreno López, José María Ramos Fernández, Max Reddel, Valeria Widler, Filippo Zimmaro, Alberto Antonioni, The Anh Han
As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence shapes our behaviour, decision-making, and social interactions. Existing theoretical research has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. In this paper, resorting to methods from evolutionary game theory, we study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game in both well-mixed and structured populations. We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI agents that only help those considered worthy/cooperative, especially in slow-moving societies where change is viewed with caution or resistance (small intensities of selection). Intuitively, in fast-moving societies (high intensities of selection), Discriminatory AIs promote higher levels of cooperation than Samaritan AIs.
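The dynamics the abstract describes can be sketched in code. The following is a minimal illustrative simulation under our own assumptions (not the paper's exact model): a well-mixed human population playing the one-shot Prisoner's Dilemma, a fixed fraction of interactions involving an AI partner, and imitation dynamics driven by the Fermi rule, whose parameter beta is the intensity of selection mentioned above. The payoff values and the AI fraction are illustrative placeholders.

```python
import math
import random

# Illustrative one-shot Prisoner's Dilemma payoffs (placeholder values):
# T = temptation, R = reward, P = punishment, S = sucker's payoff.
T, R, P, S = 1.5, 1.0, 0.0, -0.5

def pd_payoff(me, other):
    """Payoff to `me` (True = cooperate) against `other` in a one-shot PD."""
    if me and other:
        return R
    if me and not other:
        return S
    if not me and other:
        return T
    return P

def ai_cooperates(ai_kind, human_is_cooperator):
    """Samaritan AI helps everyone; Discriminatory AI helps only cooperators."""
    return True if ai_kind == "samaritan" else human_is_cooperator

def simulate(ai_kind, beta, n=100, ai_fraction=0.2, steps=20000, seed=0):
    """Return the final fraction of human cooperators.

    beta is the intensity of selection in the Fermi imitation rule:
    small beta ~ 'slow-moving' societies, large beta ~ 'fast-moving' ones.
    """
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(n)]  # True = cooperator
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)

        def avg_payoff(k):
            # Expected payoff: a mix of playing an AI and a random human.
            vs_ai = pd_payoff(pop[k], ai_cooperates(ai_kind, pop[k]))
            vs_human = pd_payoff(pop[k], pop[rng.randrange(n)])
            return ai_fraction * vs_ai + (1 - ai_fraction) * vs_human

        pi_i, pi_j = avg_payoff(i), avg_payoff(j)
        # Fermi rule: i imitates j with prob. 1 / (1 + exp(-beta * (pi_j - pi_i))).
        if rng.random() < 1.0 / (1.0 + math.exp(-beta * (pi_j - pi_i))):
            pop[i] = pop[j]
    return sum(pop) / n

if __name__ == "__main__":
    for kind in ("samaritan", "discriminatory"):
        for beta in (0.1, 10.0):
            print(kind, beta, simulate(kind, beta))
```

A Discriminatory AI makes cooperation directly more rewarding (cooperators get help, defectors do not), which matters most when selection is strong; with weak selection, imitation is nearly random and the unconditional support of a Samaritan AI can sustain cooperators instead.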
Discriminatory AI explained with an example
AI is increasingly used to make decisions that impact us directly, such as job applications, credit ratings, and match-making on dating sites. It is therefore important that AI is non-discriminatory and that its decisions do not favor people based on race, gender, or skin color. Discriminatory AI is a very wide subject that goes beyond purely technical aspects. However, to make it easily understandable, I will demonstrate what discriminatory AI looks like using examples and visuals. This will give you a way to spot a discriminatory AI. Let me first establish the context of the example.
UK watchdogs to clamp down on banks using discriminatory AI in loan applications
The news: UK regulators have signaled that they will clamp down on artificial intelligence (AI) use in banking that might be used to discriminate against people, per the FT. Banks that use AI to approve loan applications must be able to prove the tech will not worsen discrimination against minorities. The bigger picture: AI is a significant growth area in banking. Its global market size is projected to soar from $3.88 billion in 2020 to $64.03 billion in 2030, a CAGR of 32.6%, per a Research and Markets report. AI in banking is maturing, and as data analysis improves, it brings the potential for more accurate decision-making.
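As a quick sanity check of the projection cited above, the compound annual growth rate implied by growing from $3.88 billion (2020) to $64.03 billion (2030) over ten years can be computed directly; it lands close to the reported 32.6%.

```python
# CAGR implied by the cited market figures: (end / start) ** (1 / years) - 1.
start, end, years = 3.88, 64.03, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 32%, consistent with the reported 32.6%
```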
FTC warns the AI industry: Don't discriminate, or else
The U.S. Federal Trade Commission just fired a shot across the bow of the artificial intelligence industry. On April 19, 2021, a staff attorney at the agency, which serves as the nation's leading consumer protection authority, wrote a blog post about biased AI algorithms that included a blunt warning: "Keep in mind that if you don't hold yourself accountable, the FTC may do it for you." The post, titled "Aiming for truth, fairness, and equity in your company's use of AI," was notable for its tough and specific rhetoric about discriminatory AI. The author observed that the commission's authority to prohibit unfair and deceptive practices "would include the sale or use of – for example – racially biased algorithms" and that industry exaggerations regarding the capability of AI to make fair or unbiased hiring decisions could result in "deception, discrimination – and an FTC law enforcement action." Bias seems to pervade the AI industry.
Human rights groups are calling for an end to discriminatory AI
"Existing patterns of structural discrimination may be reproduced and aggravated in situations that are particular to these technologies – for example, machine learning system goals that create self-fulfilling markers of success and reinforce patterns of inequality, or issues arising from using non-representative or "biased" datasets.
Stop Using Discriminatory AI, Human Rights Groups Say - Scribble & Scroll
When it comes to developing artificial intelligence, President Trump may want a free-market approach. But a number of experts disagree -- we need guidelines to protect people from discriminatory algorithms. Today, a group of human rights organizations, including Human Rights Watch, Amnesty International, the Wikimedia Foundation, and Access Now, called on governments and technology companies to adopt guiding principles to protect human rights. As part of today's RightsCon Toronto symposium, the organizations jointly penned the Toronto Declaration on Machine Learning, which can be found in full on Access Now's website. The declaration calls for engineers to develop and revisit algorithms with the explicit goal of promoting transparency and equality while working to end algorithm-propagated racism and discrimination.